Latest news with #deep learning


South China Morning Post
02-08-2025
- Business
Meta changes course on open-source AI as China pushes ahead with advanced models
Facebook parent Meta Platforms, a major proponent of open-source artificial intelligence (AI) models with its Llama family, has indicated it will be more "careful" about going down the open-source road, a move that contrasts with China's embrace of open source.

In fact, China has probably found the path to "surpass the US in AI" thanks to the momentum of the country's vibrant open-source AI ecosystem, according to Andrew Ng, a renowned computer scientist known for his work in AI and deep learning. Ng, an adjunct professor in Stanford University's computer science department, praised China's open AI ecosystem, where companies compete against each other in a "Darwinian life-or-death struggle" to advance foundational models. In a post published on the education platform he co-founded, Ng noted that the world's top proprietary models still come from frontier US labs, while the top open models are mostly from China.

Chinese companies have been launching open-source models in quick succession in recent weeks. Alibaba Group Holding and Zhipu AI rolled out their latest reasoning and video models this past week. Alibaba claimed its Wan 2.2 video tool was the industry's "first open-source video generation models incorporating the Mixture-of-Experts (MoE) architecture", designed to help users unleash film-level creativity. Alibaba owns the South China Morning Post.

Photo: Crowds in front of the Zhipu AI booth during the World Artificial Intelligence Conference in Shanghai. Handout

Zhipu billed its GLM-4.5 as China's "most advanced open-source MoE model", saying it secured third place globally, and first place among both domestic and open-source models, based on the average score across "12 representative benchmarks".
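Both Wan 2.2 and GLM-4.5 are described above as Mixture-of-Experts models. As a rough, purely illustrative sketch of what that term means, and not the actual architecture of either model, an MoE layer routes each token to a small subset of specialist sub-networks chosen by a learned gate, so only a fraction of the parameters are active per token. The sizes below (dim=512, num_experts=8, top_k=2) are arbitrary assumptions for the example.

```python
# Illustrative Mixture-of-Experts (MoE) layer: a learned gate picks the top-k
# experts for each token, so most parameters stay idle on any given token.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The gate scores how relevant each expert is for a given token.
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, tokens, dim)
        scores = self.gate(x)                  # (batch, tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)      # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e     # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(2, 16, 512)               # (batch, tokens, hidden dim)
print(layer(tokens).shape)                     # torch.Size([2, 16, 512])
```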


TechCrunch
08-07-2025
- Automotive
Alex Kendall of Wayve brings the future of autonomous AI to Disrupt 2025
TechCrunch Disrupt 2025 hits Moscone West in San Francisco from October 27–29, bringing together more than 10,000 startup and VC leaders for a deep dive into the future of technology. One of the most compelling conversations on one of the AI Stages will feature a panel of innovators redefining what intelligent systems can do; among them is Alex Kendall, co-founder and CEO of Wayve.

From research breakthrough to real-world autonomy

Kendall co-founded Wayve in 2017 with a bold vision: to unlock autonomous mobility not through handcrafted rules, but through embodied intelligence. His pioneering research at the University of Cambridge laid the foundation for a new generation of self-driving systems powered by deep learning and computer vision. Under his leadership, Wayve became the first to show that a machine could learn to interpret its surroundings and make real-time driving decisions without relying on traditional maps or manual coding.

Today, Kendall is leading the charge toward AV2.0, an entirely new architecture for autonomous vehicles designed to scale globally. As CEO, he focuses on aligning strategy, research, partnerships, and commercialization to bring intelligent driving systems to the road. With a PhD in Computer Vision and Robotics, award-winning academic work, and recognition on the Forbes 30 Under 30 list, Kendall is a rare blend of scientist, founder, and industry operator.

What to expect on the AI Stage

While full panel details are still under wraps, Kendall's participation ensures the session will offer more than theoretical takes. Expect insights on how embodied intelligence can shift the trajectory of AI, the challenges of building systems that adapt to the real world, and what it takes to commercialize autonomy at scale.

Catch Alex Kendall live on one of the two AI Stages at TechCrunch Disrupt 2025, happening October 27–29 at Moscone West in San Francisco. Exact session timing will be announced. Register here to join more than 10,000 startup and VC leaders and save up to $675 before prices increase.
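The piece above describes Wayve's approach as learning to drive from data rather than from handcrafted rules or traditional maps. As a toy sketch of that end-to-end idea only, not Wayve's actual model or the AV2.0 architecture, a learned policy might map camera frames straight to driving commands; the class name, layer sizes, and the two outputs (steering, speed) are assumptions for illustration.

```python
# Toy end-to-end driving policy: camera frames in, steering and speed out.
# Purely illustrative of the "learned, not handcrafted" idea; not Wayve's code.
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder: interprets the raw camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Control head: predicts a steering angle and a target speed.
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, frames):                    # frames: (batch, 3, H, W)
        return self.head(self.encoder(frames))    # (batch, 2) -> [steering, speed]

policy = TinyDrivingPolicy()
dummy_frame = torch.randn(1, 3, 96, 160)          # one synthetic camera frame
steering, speed = policy(dummy_frame)[0]
print(float(steering), float(speed))
```

In practice such a policy would be trained on large amounts of real driving data; the point of the sketch is only that the mapping from pixels to controls is learned rather than hand-coded.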

ABC News
15-06-2025
- Science
Sydney team develop AI model to identify thoughts from brainwaves
What if you could operate your phone just by thinking about it? And imagine your phone automatically enhancing your concentration and memory, or even being used to read someone else's mind. It sounds like science fiction, but this technology, called a brain-computer interface, is being supercharged by the advent of artificial intelligence (AI). Australian researchers at the University of Technology Sydney (UTS) are at the forefront of exploring how AI can be used to read our minds. Here's a walk-through of how it works.

Postdoctoral research fellow Daniel Leong sits in front of a computer at the GrapheneX-UTS Human-centric Artificial Intelligence Centre wearing what looks like a rubber swimming cap with wires coming out of it. The 128 electrodes in the cap detect electrical impulses in Dr Leong's brain cells and record them on a computer. It's called an electroencephalogram (EEG), a technology doctors use to diagnose brain conditions. The UTS team is using it to read his thoughts.

A pioneering AI model, developed by Dr Leong, PhD student Charles (Jinzhao) Zhou and his supervisor Chin-Teng Lin, uses deep learning to translate the brain signals from the EEG into specific words. Deep learning is a form of AI that uses artificial neural networks, loosely modelled on how the human brain works, to learn from data, in this case large amounts of EEG data.

Dr Leong reads the simple sentence "jumping happy just me" slowly and silently on the screen. He also mouths the words, which makes them easier to detect because it activates the parts of the brain involved in speech. The AI model works instantly to decode the words and produce a probability ranking, based on what it has learned from the EEG recordings of 12 volunteers reading texts.

At this stage, Professor Lin says, the AI model has been trained on a limited collection of words and sentences to make individual words easier to detect. A second type of AI, a large language model, then matches the decoded words, corrects mistakes in the EEG decoding and assembles a sentence. Large language models, like ChatGPT, are trained on massive text datasets to understand and generate human-like text.

"I am jumping happily, it's just me" is the sentence the AI model comes up with, with no input from Dr Leong apart from his brainwaves. Like a lot of things AI is doing at the moment, it's not perfect. The team is recruiting more people to read text while wearing the EEG cap to refine the AI model, and they also plan to use it to communicate between two people.
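A minimal sketch of the two-stage pipeline described above: a deep network turns a window of 128-channel EEG into a probability ranking over a small vocabulary, and a language model then corrects and assembles the result into a sentence. The class name, the tiny vocabulary, the tensor shapes and the rescoring stub are all assumptions for illustration, not the UTS team's code.

```python
# Stage 1: EEG window -> probability ranking over a small vocabulary.
# Stage 2: language-model rescoring (stubbed here) turns ranked words into a sentence.
import torch
import torch.nn as nn

VOCAB = ["i", "am", "jumping", "happy", "just", "me", "<blank>"]

class EEGWordDecoder(nn.Module):
    def __init__(self, channels=128, vocab_size=len(VOCAB)):
        super().__init__()
        # Temporal convolutions over the 128-channel EEG signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, vocab_size)

    def forward(self, eeg):                 # eeg: (batch, channels, time samples)
        return self.classifier(self.encoder(eeg)).softmax(dim=-1)

def rescore_with_language_model(ranked_words):
    # In the real system a large language model corrects mistakes in the decoded
    # words and produces a fluent sentence such as "I am jumping happily, it's
    # just me"; this stub simply joins the top guesses.
    return " ".join(ranked_words)

decoder = EEGWordDecoder()
window = torch.randn(1, 128, 512)           # one synthetic EEG window
probs = decoder(window)[0]                  # probability ranking over VOCAB
top_words = [VOCAB[int(i)] for i in probs.topk(3).indices]
print(rescore_with_language_model(top_words))
```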
Twenty years ago, a man with quadriplegia had a device implanted in his brain that allowed him to control a mouse cursor on a screen. It was the first time a brain-computer interface had been used to restore functions lost to paralysis. Tech billionaire Elon Musk is working on a modern version of this implantable technology to restore autonomy to people with quadriplegia. A non-invasive EEG brain-computer interface has the obvious advantages of being portable and not requiring surgery, but because it sits outside the brain, the signals are noisy. "There's also some mix up, right? Since the signal you measure on the skull surfaces come from different sources and they mix up together." That's where the AI comes in: it amplifies and filters the brain signals to reduce noise and generate speech markers.

Mohit Shivdasani is a bioelectronics expert at the University of NSW. Researchers have been looking for patterns in biological signals "forever", he said, but AI can now recognise brainwave patterns that have never been identified before. He said AI, particularly when used in implantable devices, could quickly personalise the reading of brainwaves to the way an individual completes a task. "What AI can do is very quickly be able to learn what patterns correspond to what actions in that given person. And a pattern that's revealed in one person may be completely different to a pattern that's revealed in another person," he said.

Professor Lin said that is exactly what the team is doing to improve its AI model, by using "neurofeedback", which means the model tunes into the way different people speak. "To help AI to learn better, we call this technology a kind of AI-human co-learning," he said. The team is achieving about 75 per cent accuracy converting thoughts to text, and Professor Lin said they were aiming for 90 per cent, similar to what the implanted models achieve.

Dr Shivdasani said non-invasive EEG that uses mind-reading AI has potential for managing stroke patients in hospitals. "One of the awesome things about the brain is its ability to heal, so I can see a situation where an autonomous brain-machine interface is used during the rehabilitation phase to allow the brain to keep working and to keep trying for a certain task," he said. If the brain cells regenerate, the patient may no longer need the technology, he said. Helping with speech therapy for people with autism is another potential use. Such rehabilitative uses rely on a "closed loop" brain-computer interface, where real-time feedback comes from the user's brain activity.

Leaping into the realm of science fiction is the possibility of using this technology to enhance our attention, memory, focus and even emotional regulation. "As scientists, we look at a medical condition and we look at what function has been affected by that medical condition. What is the need of the patient? We then address that unmet need through technology to restore that function back to what it was," Dr Shivdasani said. "After that, the sky's the limit."

Before we start operating our phones with our minds, or communicating directly from brain to brain, the technology needs to become more "wearable"; no one is going to walk around in a cap with wires coming out of it. Professor Lin said the technology could interact with devices such as the augmented reality glasses already on the market, and big tech is already working on earbuds with electrodes to measure brain signals.

Then there's our "brain privacy" and other ethical considerations, Dr Shivdasani said. "We have the tools, but what are we going to use them for? And how ethically are we going to use them? That's with any technology that allows us to do things we've never been able to do."
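On the engineering side, Professor Lin's points about noisy scalp signals and "AI-human co-learning" map onto two familiar steps: filtering the raw EEG, then fine-tuning a pretrained decoder on a small calibration set from the individual user. The sketch below is a rough illustration under assumed details; the 1-40 Hz band, 256 Hz sampling rate, the stand-in decoder and the training loop are all assumptions, not the UTS implementation.

```python
# (1) Band-pass filter the noisy scalp EEG; (2) adapt a pretrained decoder to
# one user with a small amount of their own data. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt
import torch
import torch.nn as nn

def bandpass(eeg, low=1.0, high=40.0, fs=256.0):
    """Keep roughly 1-40 Hz (assumed band of interest) from raw EEG, shape (..., samples)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def personalise(decoder, user_eeg, user_labels, steps=100):
    """Fine-tune only the final layer on a small calibration set from one user."""
    optimiser = torch.optim.Adam(decoder[-1].parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimiser.zero_grad()
        loss = loss_fn(decoder(user_eeg), user_labels)
        loss.backward()
        optimiser.step()
    return decoder

# Stand-in "pretrained" decoder: flattens a filtered EEG window and scores 7 words.
decoder = nn.Sequential(nn.Flatten(), nn.Linear(128 * 512, 64), nn.ReLU(), nn.Linear(64, 7))
raw = np.random.randn(16, 128, 512)                       # 16 synthetic calibration windows
clean = torch.tensor(bandpass(raw).copy(), dtype=torch.float32)
labels = torch.randint(0, 7, (16,))                       # synthetic word labels
personalise(decoder, clean, labels)
```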